
Making generative AI work in the enterprise: New from MIT Sloan Management Review

With artificial intelligence making its way into everyday work, enterprise leaders need to rethink how they manage people, processes, and projects. The latest insights from MIT Sloan Management Review describe how to reorganize the way individuals and teams work in the presence of AI, how to use nudges to encourage AI users to check outputs for accuracy, and what can be learned from a leading AI adopter that has chosen to emphasize enabling innovation rather than governance that can restrict it.

Reorganize work around generative AI before generative AI does it for you

Organizational models managed to survive previous advancements, from the assembly line to the internet, in large part because technology could contribute only so much intelligence. Ethan Mollick, SM ’04, PhD ’10, an associate professor at the University of Pennsylvania, writes that generative AI and the large language models it’s built on are different because they work at a human scale. At the Wharton School, for example, team members at its software startup, Wharton Interactive, use AI tools to generate documentation, code, marketing material, and more, all at a cost of less than $100 a month.

Amid this potential for disrupting work, Mollick provides three principles for reorganizing work around how AI is used today and how it will be used tomorrow.

Enlist current AI users. Recognize that these users could be at any level of the organization and are likely using AI without their managers being aware of it. Reassure them that using AI won’t cost them job responsibilities or wages and that AI won’t be used to monitor their every move.

Let teams develop their own methods. Typical processes that centralize project management, innovation, and strategy move far too slowly for AI. Meanwhile, consultants often struggle to determine how AI will work for a specific company. Individual teams are best positioned to determine how to work with AI.

Anticipate rapid change. Traditional top-down change management models will fail to anticipate advances in AI models. Leaders need to adapt work processes to AI with one eye on current use cases and the other on emerging models.

Read: Reinventing the organization for generative AI and large language models

How beneficial friction helps users find generative AI errors

Recent research from MIT Sloan senior lecturer and research scientist Renée Richardson Gosline, along with MIT doctoral candidate Haiwen Li, suggests that nudges can help users spot errors in text from generative AI tools, with little impact on their efficiency.

In the study, a collaboration between MIT and Accenture, users wrote executive summaries of company profiles, complete with references to available sources. Participants had access to text outputs from the generative AI tool ChatGPT and were assigned to one of three groups. One received text outputs with likely errors or omissions highlighted in specific colors, one received no such highlights, and one received outputs with likely correct passages, as well as likely errors and omissions, highlighted.

Users who received nudges in the form of highlighted text missed fewer errors and detected more omissions. Those whose nudges also flagged correct passages spent more time completing the task, while the extra time taken by those who received only error- and omission-related highlights wasn’t statistically significant. The nudges are a form of what the researchers call “beneficial friction.”
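The study’s interface isn’t public, but the mechanics of this kind of nudge are straightforward to picture. Below is a minimal, hypothetical Python sketch of the highlighting idea: spans that a verifier has flagged as likely errors are wrapped in markup so a reviewer’s eye is drawn to them before the text is reused. The Flag structure, the flagging reasons, and the example offsets are illustrative assumptions, not the researchers’ actual tooling.

```python
# A minimal sketch of "beneficial friction": wrap passages of AI-generated
# text that a verifier has flagged as likely errors, nudging the human
# reviewer to check them. Everything here is a hypothetical illustration;
# the study's own interface and flagging model are not public.
from dataclasses import dataclass

@dataclass
class Flag:
    start: int   # character offset where the suspect span begins
    end: int     # character offset where it ends (exclusive)
    reason: str  # why it was flagged, e.g., "unsupported claim"

def highlight(text: str, flags: list[Flag]) -> str:
    """Wrap each flagged span in an HTML <mark> tag carrying its reason."""
    out, cursor = [], 0
    for f in sorted(flags, key=lambda f: f.start):
        out.append(text[cursor:f.start])  # untouched text before the flag
        out.append(f'<mark title="{f.reason}">{text[f.start:f.end]}</mark>')
        cursor = f.end
    out.append(text[cursor:])  # untouched text after the last flag
    return "".join(out)

if __name__ == "__main__":
    summary = "Acme Corp was founded in 1999 and employs 12,000 people."
    flags = [Flag(25, 29, "date not found in source documents")]
    print(highlight(summary, flags))
    # Acme Corp was founded in <mark ...>1999</mark> and employs 12,000 people.
```

One design choice follows directly from the findings: because error-only highlights added no statistically significant time for reviewers, a sparse set of flags, rather than marking everything, is the form of friction most likely to pay for itself.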

Gosline, Li, and their co-authors offer three takeaways from these findings:

  • Given participants’ willingness to use up to 80% of a model’s output, the prompt (the input into the generative AI tool) should be considered carefully.
  • Making errors more conspicuous can mitigate users’ overconfidence in their ability to spot errors.
  • Experimentation is essential to understanding how humans react to outputs from machine learning tools.

Read: Nudge users to catch generative AI errors

Shift the AI focus from governance to enablement

AI has made inroads into health care with its potential to improve workflows across organizations, whether it’s streamlining administrative processes or helping doctors make a diagnosis faster. MIT Initiative on the Digital Economy fellow Thomas H. Davenport and co-author Randy Bean write about the Mayo Clinic’s experience with AI models, going so far as to describe the Minnesota-based integrated health system as “the most aggressive adopter of AI among U.S. health care providers.”

One key to the Mayo Clinic’s success is shifting its focus on implementing AI from governance to enablement. While governance dictates what can and cannot be done, enablement emphasizes giving employees the latitude to build and test AI models applicable to their own domains. (It helps that clinical staff, by their nature, are oriented toward quantitative thinking, Davenport and Bean point out.)

Mayo has a 60-person team supporting AI and data enablement and managing the organization’s data library. The group offers internal users a platform for building AI products and applications as well as for gathering information necessary for obtaining U.S. Food and Drug Administration approval. That said, end users are responsible for ensuring data quality and integration, as they’re more familiar with the data and its value. This complements rather than complicates enablement, as it ensures that others throughout the organization can incorporate high-quality data into their AI models.

Read: Mayo Clinic’s healthy model for AI success

For more info: Zach Church, Editorial & Digital Media Director, (617) 324-0804